Distributed Deterministic Broadcasting in Uniform-Power Ad Hoc Wireless Networks
The development of many futuristic technologies, such as MANETs, VANETs, iThings, and
nano-devices, depends on efficient distributed communication protocols in
multi-hop ad hoc networks. The vast majority of research in this area focuses on
designing heuristic protocols and analyzing their performance by simulations on
networks generated randomly or obtained from practical measurements of
(usually small) wireless networks. Moreover, such work often
assumes access to truly random sources, which is frequently unrealistic for
wireless devices. In this work we use a formal framework to study the problem
of broadcasting and its time complexity in any two-dimensional Euclidean
wireless network with uniform transmission powers. For the analysis, we
consider two popular models of ad hoc networks based on the
Signal-to-Interference-and-Noise Ratio (SINR): one with opportunistic links,
and the other with randomly disturbed SINR. In the former model, we show that
one of our algorithms accomplishes broadcasting in a number of rounds bounded
in terms of the number of nodes and the diameter of the network. If nodes
know a priori the granularity of the network, i.e., the ratio of the
maximum transmission range to the minimum distance between any two stations,
a modification of this algorithm accomplishes broadcasting in a number of
rounds bounded in terms of the granularity.
Finally, we modify both algorithms to make them efficient in the latter model,
with randomly disturbed SINR, at the cost of only a logarithmic loss in performance.
Ours are the first distributed deterministic solutions for the broadcast task
that are provably efficient and scalable under both models.
Comment: arXiv admin note: substantial text overlap with arXiv:1207.673
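The SINR condition at the heart of both models can be made concrete with a short sketch. The function names, parameter values (path-loss exponent, noise, threshold), and geometry below are illustrative assumptions, not taken from the paper; only the uniform-power SINR formula itself is standard:

```python
import math

def sinr(sender, receiver, transmitters, power=1.0, alpha=2.5, noise=1e-9):
    """SINR at `receiver` for `sender`, given all concurrently transmitting
    nodes; uniform power, path loss d**(-alpha) in the Euclidean plane."""
    def gain(u, v):
        return power * math.dist(u, v) ** (-alpha)
    interference = sum(gain(w, receiver) for w in transmitters if w != sender)
    return gain(sender, receiver) / (noise + interference)

def received(sender, receiver, transmitters, beta=1.5, **kw):
    """A message is received iff the SINR clears the threshold beta."""
    return sinr(sender, receiver, transmitters, **kw) >= beta
```

A lone sender at unit distance is heard easily, while a concurrent transmitter close to the receiver drowns it out, which is exactly the interference phenomenon the broadcast algorithms must schedule around.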
Deterministic Digital Clustering of Wireless Ad Hoc Networks
We consider deterministic distributed communication in wireless ad hoc
networks of identical weak devices under the SINR model without predefined
infrastructure. Most algorithmic results in this model rely on various
additional features or capabilities, e.g., randomization, access to geographic
coordinates, power control, carrier sensing with various precision of
measurements, and/or interference cancellation. We study the pure scenario, in which
no such features are available. As a general tool, we develop a deterministic
distributed clustering algorithm. Our solution relies on a new type of
combinatorial structure (selectors), which might be of independent interest.
Using the clustering, we develop a deterministic distributed local broadcast
algorithm that accomplishes this task in a number of rounds bounded in terms of
the density of the network. To the best of our knowledge, this is
the first solution in the pure scenario that is only a polylogarithmic factor away
from the universal lower bound, which is valid also for scenarios with
randomization and the other features above. Therefore, none of these features
substantially helps in performing the local broadcast task. Using the clustering,
we also build a deterministic global broadcast algorithm that terminates within
a number of rounds bounded in terms of the diameter of the network.
This result is complemented by a lower bound that depends on the path-loss
parameter of the environment. This lower bound shows that randomization or
knowledge of one's own location substantially helps (by a polynomial factor) in global
broadcast. Therefore, unlike in the case of local broadcast, some additional
model features may help in global broadcast.
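The selectors used here are a new variant introduced by the paper; the classical notion they build on, a k-selective family, can at least be checked by brute force. The following is a hypothetical illustration of that classical definition, not the paper's structure:

```python
from itertools import combinations

def is_selective(family, n, k):
    """Check that `family` (a list of sets over range(n)) is k-selective:
    every non-empty S with at most k elements is 'hit' by some F in the
    family with |F & S| == 1, i.e., exactly one element of S transmits
    alone when the schedule follows F."""
    for size in range(1, k + 1):
        for subset in combinations(range(n), size):
            S = set(subset)
            if not any(len(F & S) == 1 for F in family):
                return False
    return True
```

The family of all singletons is trivially selective (at the cost of n rounds), while a coarse partition fails: a set straddling one block is never isolated.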
On Packet Scheduling with Adversarial Jamming and Speedup
In Packet Scheduling with Adversarial Jamming, packets of arbitrary sizes
arrive over time to be transmitted over a channel on which instantaneous
jamming errors occur at times chosen by the adversary and not known to the
algorithm. A transmission taking place at the time of jamming is corrupted, and
the algorithm learns this fact immediately. An online algorithm maximizes the
total size of the packets it successfully transmits, and the goal is to develop an
algorithm with the lowest possible asymptotic competitive ratio, where the
additive constant may depend on the packet sizes.
Our main contribution is a universal algorithm that works for any speedup and
packet sizes and, unlike previous algorithms for the problem, it does not need
to know these properties in advance. We show that this algorithm guarantees
1-competitiveness with speedup 4, making it the first known algorithm to
maintain 1-competitiveness with a moderate speedup in the general setting of
arbitrary packet sizes. We also prove a lower bound on
the speedup of any 1-competitive deterministic algorithm, showing that our
algorithm is close to optimal.
Additionally, we formulate a general framework for analyzing our algorithm
locally and use it to show upper bounds on its competitive ratio for a range of
speedups and for several special cases, recovering some previously known
results, each of which had a dedicated proof. In particular, our algorithm is
3-competitive without speedup, matching both the (worst-case) performance of
the algorithm by Jurdzinski et al. and the lower bound by Anta et al.
Comment: Appeared in Proc. of the 15th Workshop on Approximation and Online
Algorithms (WAOA 2017).
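A toy simulation makes the model concrete. The greedy policy below (always restart the largest pending packet) is a deliberately naive stand-in, not the paper's algorithm; all names and parameters are illustrative:

```python
def greedy_transmit(packets, jams, speedup=1.0):
    """Simulate a naive online scheduler under adversarial jamming:
    always (re)start the largest pending packet; a jam during a
    transmission voids it entirely and the algorithm notices at once.
    Returns the total size successfully transmitted."""
    pending = sorted(packets)            # ascending, so pop() is the largest
    jams = sorted(jams)
    t, done = 0.0, 0.0
    while pending:
        size = pending.pop()
        finish = t + size / speedup      # speed-s channel: s size units per time unit
        hit = next((j for j in jams if t < j <= finish), None)
        if hit is None:
            done += size                 # transmission completed
            t = finish
        else:
            pending.append(size)         # corrupted: must be retransmitted
            pending.sort()
            t = hit                      # the algorithm learns of the jam immediately
            jams = [j for j in jams if j > hit]
    return done
```

With packets of sizes 1, 2, 4 and one jam at time 3, the first attempt at the size-4 packet is voided at time 3 and all packets still complete, for a total of 7; the jam costs the scheduler the three wasted time units.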
Large-scale phylogenomics of aquatic bacteria reveal molecular mechanisms for adaptation to salinity
The crossing of environmental barriers poses major adaptive challenges. The rareness of freshwater-marine transitions separates the bacterial communities, but how these are related to brackish counterparts remains elusive, as do the molecular adaptations facilitating cross-biome transitions. We conducted a large-scale phylogenomic analysis of freshwater, brackish, and marine quality-filtered metagenome-assembled genomes (11,248). Average nucleotide identity analyses showed that bacterial species rarely existed in multiple biomes. In contrast, distinct brackish basins cohosted numerous species, but their intraspecific population structures displayed clear signs of geographic separation. We further identified the most recent cross-biome transitions, which were rare, ancient, and most commonly directed toward the brackish biome. Transitions were accompanied by systematic changes in amino acid composition and isoelectric point distributions of inferred proteomes, which evolved over millions of years, as well as by convergent gains or losses of specific gene functions. Therefore, adaptive challenges entailing proteome reorganization and specific changes in gene content constrain cross-biome transitions, resulting in species-level separation between aquatic biomes.
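The isoelectric point shifts mentioned above can be illustrated with a minimal calculator: the net charge of a protein at a given pH is a sum of Henderson-Hasselbalch terms over its ionizable groups, and the pI is the pH where that charge crosses zero. The pKa table and sequences below are illustrative assumptions; real pipelines use curated pKa sets:

```python
# Illustrative pKa values for positively and negatively ionizable groups.
PKA_POS = {"nterm": 8.6, "K": 10.8, "R": 12.5, "H": 6.5}
PKA_NEG = {"cterm": 3.6, "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}

def net_charge(seq, ph):
    """Approximate net charge of a one-letter amino acid sequence at `ph`."""
    counts = {"nterm": 1, "cterm": 1}
    for aa in seq:
        counts[aa] = counts.get(aa, 0) + 1
    charge = 0.0
    for group, pka in PKA_POS.items():       # protonated fraction is positive
        charge += counts.get(group, 0) / (1 + 10 ** (ph - pka))
    for group, pka in PKA_NEG.items():       # deprotonated fraction is negative
        charge -= counts.get(group, 0) / (1 + 10 ** (pka - ph))
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH at which the net charge vanishes."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

An aspartate-rich peptide comes out strongly acidic and a lysine-rich one strongly basic, which is the kind of proteome-wide signal the study compares across biomes.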
Undirected Graphs of Entanglement Two
Entanglement is a complexity measure of directed graphs that originates in fixed
point theory. This measure has proven useful in designing efficient algorithms
to verify logical properties of transition systems. We are interested in the
problem of deciding whether a graph has entanglement at most k. As this measure
is defined by means of games, game-theoretic ideas naturally lead to
polynomial-time algorithms that, for fixed k, decide the problem. Known
characterizations of directed graphs of entanglement at most 1 lead, for k = 1,
to even faster algorithms. In this paper we present an explicit
characterization of undirected graphs of entanglement at most 2. With such a
characterization at hand, we devise a linear-time algorithm to decide whether
an undirected graph has this property.
Common arterial trunk and ventricular non-compaction in Lrp2 knockout mice indicate a crucial role of LRP2 in cardiac development
Lipoprotein-related receptor protein 2 (LRP2) is important for the development of the embryonic neural crest and brain in both mice and humans. Although a role in cardiovascular development can be expected, the hearts of Lrp2 knockout (KO) mice have not yet been investigated. We studied the cardiovascular development of Lrp2 KO mice between embryonic day 10.5 (E10.5) and E15.5, applying morphometry and immunohistochemistry, using antibodies against Tfap2α (neural crest cells), Nkx2.5 (second heart field), WT1 (epicardium-derived cells), tropomyosin (myocardium) and LRP2. The Lrp2 KO mice display a range of severe cardiovascular abnormalities, including aortic arch anomalies, common arterial trunk (persistent truncus arteriosus) with coronary artery anomalies, ventricular septal defects, overriding of the tricuspid valve and marked thinning of the ventricular myocardium. Both the neural crest cells and the second heart field, which are essential for the lengthening and growth of the right ventricular outflow tract, are abnormally positioned in the Lrp2 KO. This explains the absence of the aorto-pulmonary septum, which leads to common arterial trunk and ventricular septal defects. Severe blebbing of the epicardial cells covering the ventricles is seen. Epithelial-mesenchymal transition does occur; however, there are fewer WT1-positive epicardium-derived cells in the ventricular wall than normal, coinciding with the myocardial thinning and deep intertrabecular spaces. LRP2 plays a crucial role in cardiovascular development in mice. This corroborates findings of cardiac anomalies in humans with LRP2 mutations. Future studies should reveal the underlying signaling mechanisms in which LRP2 is involved during cardiogenesis.
Oink: an Implementation and Evaluation of Modern Parity Game Solvers
Parity games have important practical applications in formal verification and
synthesis, especially to solve the model-checking problem of the modal
mu-calculus. They are also interesting from the theory perspective, as they are
widely believed to admit a polynomial solution, but so far no such algorithm is
known. In recent years, a number of new algorithms and improvements to existing
algorithms have been proposed. We implement a new and easy-to-extend tool, Oink,
which is a high-performance implementation of modern parity game algorithms. We
further present a comprehensive empirical evaluation of modern parity game
algorithms and solvers, both on real-world benchmarks and on randomly generated
games. Our experiments show that our new tool Oink outperforms the current
state of the art.
Comment: Accepted at TACAS 201
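Among the algorithms such a solver implements, Zielonka's recursive algorithm is the classic baseline. A minimal sketch in our own toy formulation, assuming every vertex has at least one successor (this is not code from Oink):

```python
def attract(V, succ, owner, player, target):
    """Vertices in V from which `player` can force the play into `target`."""
    A = set(target)
    changed = True
    while changed:
        changed = False
        for v in V - A:
            outs = [w for w in succ[v] if w in V]
            mine = owner[v] == player
            if (mine and any(w in A for w in outs)) or \
               (not mine and outs and all(w in A for w in outs)):
                A.add(v)
                changed = True
    return A

def zielonka(V, succ, owner, prio):
    """Winning regions (W0, W1) of the max-parity game restricted to V:
    player 0 wins a play iff the highest priority seen infinitely often
    is even."""
    if not V:
        return set(), set()
    d = max(prio[v] for v in V)
    p = d % 2                            # the player favored by priority d
    A = attract(V, succ, owner, p, {v for v in V if prio[v] == d})
    W = zielonka(V - A, succ, owner, prio)
    if not W[1 - p]:                     # opponent wins nothing below: p takes all
        return (set(V), set()) if p == 0 else (set(), set(V))
    B = attract(V, succ, owner, 1 - p, W[1 - p])
    W = zielonka(V - B, succ, owner, prio)
    return (W[0] | B, W[1]) if 1 - p == 0 else (W[0], W[1] | B)
```

On a two-vertex cycle whose highest priority is even plus an odd self-loop, the cycle goes to player 0 and the loop to player 1, as expected.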
Effects of an early life diet containing large phospholipid-coated lipid globules on hepatic lipid metabolism in mice
We recently reported that feeding mice in their early life a diet containing a lipid structure more similar to that of human milk (eIMF, Nuturis) results in lower body weight and fat mass gain upon high-fat feeding in later life, compared to a control diet (cIMF). To understand the underlying mechanisms, we now explored parameters possibly involved in this long-term effect. Male C57BL/6JOlaHsd mice, fed rodent diets containing eIMF or cIMF from postnatal (PN) day 16 to 42, were sacrificed at PN42. Hepatic proteins were measured using targeted proteomics. Lipids were assessed by LC–MS/MS (acylcarnitines) and GC-FID (fatty-acyl chain profiles). Early life growth and body composition, cytokines, and parameters of bile acid metabolism were similar between the groups. Hepatic concentrations of multiple proteins involved in β-oxidation (+17%), the TCA cycle (+15%), and mitochondrial antioxidative proteins (+28%) were significantly higher in eIMF- versus cIMF-fed mice (p < 0.05). Hepatic L-carnitine levels, required for fatty acid uptake into the mitochondria, were higher (+33%, p < 0.01) in eIMF-fed mice. The present study indicates that eIMF-fed mice have higher hepatic levels of proteins involved in fatty acid metabolism and oxidation. We speculate that eIMF feeding programs the metabolic handling of dietary lipids.
Model Checking Probabilistic Real-Time Properties for Service-Oriented Systems with Service Level Agreements
The assurance of quality of service properties is an important aspect of
service-oriented software engineering. Notations for so-called service level
agreements (SLAs), such as the Web Service Level Agreement (WSLA) language,
provide a formal syntax to specify such assurances in terms of (legally
binding) contracts between a service provider and a customer. On the other
hand, formal methods for verification of probabilistic real-time behavior have
reached a level of expressiveness and efficiency that allows them to be applied in
real-world scenarios. In this paper, we propose to employ the recently
introduced model of Interval Probabilistic Timed Automata (IPTA) for the formal
verification of QoS properties of service-oriented systems. Specifically, we
show that IPTA, in contrast to Probabilistic Timed Automata (PTA), are able to
capture the guarantees specified in SLAs directly. A particular challenge in
the analysis of IPTA is that their naive semantics usually yields an
infinite set of states and infinitely branching transitions. However, using
symbolic representations, IPTA can be analyzed rather efficiently. We have
developed the first implementation of an IPTA model checker by extending the
PRISM tool, and we show that model checking IPTA is only slightly more expensive
than model checking comparable PTA.
Comment: In Proceedings INFINITY 2011, arXiv:1111.267
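The interval branching can be made concrete: in an IPTA-style distribution each outcome carries a probability interval rather than a point value, and the analysis must reason over all consistent instantiations. A hedged sketch of that consistency check, with names and API of our own invention (not PRISM's):

```python
def realizable(intervals):
    """Do probability intervals [lo_i, hi_i] admit a distribution?
    Feasible iff each bound is sane and sum(lo) <= 1 <= sum(hi)."""
    return (all(0 <= lo <= hi <= 1 for lo, hi in intervals)
            and sum(lo for lo, _ in intervals) <= 1
            and sum(hi for _, hi in intervals) >= 1)

def instantiate(intervals):
    """One concrete distribution: start at the lower bounds, then greedily
    distribute the remaining probability mass within the upper bounds."""
    if not realizable(intervals):
        raise ValueError("no distribution satisfies the intervals")
    probs = [lo for lo, _ in intervals]
    slack = 1.0 - sum(probs)
    for i, (lo, hi) in enumerate(intervals):
        add = min(hi - lo, slack)
        probs[i] += add
        slack -= add
    return probs
```

The infinitude of such instantiations per transition is exactly why a naive IPTA semantics has infinitely branching transitions, and why symbolic representations are needed.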
Computer aided synthesis: a game theoretic approach
In this invited contribution, we propose a comprehensive introduction to game
theory as applied in computer-aided synthesis. In this context, we give some
classical results on two-player zero-sum games and then on multi-player
non-zero-sum games. The simple case of one-player games is strongly related to
automata theory on infinite words. Throughout the article, we focus on general
approaches to solving the studied problems, and we provide several illustrative
examples as well as intuitions on the proofs.
Comment: Invited contribution for the conference "Developments in Language
Theory" (DLT 2017).
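The backbone of the classical two-player zero-sum results is backward induction on finite game trees. A minimal sketch under our own naming, with the two players strictly alternating:

```python
def game_value(state, succ, payoff, maximizing=True):
    """Backward induction (minimax) on a finite two-player zero-sum game
    tree: leaves carry payoffs for the maximizing player; internal states
    alternate between the maximizer and the minimizer."""
    children = succ.get(state, [])
    if not children:
        return payoff[state]
    values = [game_value(c, succ, payoff, not maximizing) for c in children]
    return max(values) if maximizing else min(values)
```

The maximizer picks the branch whose worst case is best: below, branch "a" guarantees only -1 against an adversarial opponent while branch "b" guarantees 0, so the value of the game is 0.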